Fix empty-target self-distillation loss to stay connected to model graph #5572
Open
walawalagoose wants to merge 2 commits into huggingface:main
Conversation
What does this PR do?
This PR fixes an edge case in the experimental SDPO/self-distillation path where an empty self-distillation batch returns a standalone zero tensor instead of a zero loss connected to the model forward graph.
In distillation_only mode, a batch can legitimately contain no valid self-distillation targets, for example when no rollout in the current batch exceeds success_reward_threshold and no environment feedback is used. In the current implementation, this path returns a standalone zero tensor (a fresh torch.tensor(0.0) created with requires_grad=True). Although that tensor has requires_grad=True, it is not connected to any model parameter, and under DeepSpeed ZeRO-2, calling backward() on such a graph-disconnected loss can fail during gradient reduction.
This PR fixes the issue by keeping the self-distillation loss connected to the student/teacher forward graph even when the effective target mask is empty, so the loss remains numerically zero but backward stays safe.
This behavior is also closer to the original verl SDPO implementation, where an all-zero mask yields a zero loss through masking rather than through an early return.
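To make the distinction concrete, here is a small, TRL-independent sketch (the model and variable names are illustrative, not the trainer's code) of why a fresh zero tensor breaks backward while a zero loss routed through the forward graph does not:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
x = torch.randn(3, 4)

# What the old code effectively returned: a fresh leaf tensor. backward() runs,
# but no gradient ever reaches the model parameters.
disconnected_zero = torch.tensor(0.0, requires_grad=True)
disconnected_zero.backward()
print(model.weight.grad)   # None

# A zero loss that still flows through the forward graph: backward() populates
# a (zero-valued) gradient on every parameter, so distributed reduction stays safe.
connected_zero = model(x).sum() * 0.0
connected_zero.backward()
print(model.weight.grad)   # tensor of zeros
```

Under DeepSpeed ZeRO-2, the second form gives the gradient reducer a real (zero) gradient for every parameter, which is the behavior this PR restores.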
A representative failure mode is:
- SDPOTrainer
- sdpo_policy_loss_mode="distillation_only"
- include_environment_feedback=False
In that setup, DeepSpeed can fail during backward/all-reduce because the returned zero loss is graph-disconnected.
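As a purely hypothetical illustration (the variable names below are mine, not the trainer's), this is how such a batch ends up with an all-zero target mask: with environment feedback disabled, targets come only from rollouts whose reward exceeds success_reward_threshold, so a batch with no successful rollout has nothing to distill against.

```python
import torch

rollout_rewards = torch.tensor([0.10, 0.25, 0.05])   # no rollout succeeded
success_reward_threshold = 0.5
target_mask = (rollout_rewards > success_reward_threshold).float()
print(target_mask, target_mask.sum())   # tensor([0., 0., 0.]) tensor(0.) -> old early-return branch
```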
Before submitting
AI writing disclosure
We welcome the use of AI tools to help with contributions. For transparency and to help us improve our review process, please indicate the level of AI involvement in this PR.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
Note
Medium Risk
Changes loss computation behavior in the experimental self-distillation path by always running model forwards even when the target mask is empty, which can affect training performance/metrics but should improve backward stability (e.g., under ZeRO).
Overview
Fixes an edge case in SelfDistillationMixin._compute_self_distillation_loss where batches with an all-zero response_mask previously returned a standalone torch.tensor(0.0). The loss is now always computed via the normal masked aggregation, so it stays connected to the student/teacher forward graph for safe backward()/reduction, and a new self_distillation/empty_target_batch metric is logged to track when this happens.
Reviewed by Cursor Bugbot for commit ee61598.
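For readers who want the shape of the fix, here is a minimal sketch of the pattern described in the overview (illustrative only, not the actual _compute_self_distillation_loss code; in particular, clamping the denominator to avoid division by zero is my assumption):

```python
import torch

def masked_distillation_loss(per_token_loss, response_mask, metrics):
    # Always aggregate through the mask so the loss stays attached to the
    # student/teacher forward graph, even when no target token is valid.
    mask = response_mask.float()
    denom = mask.sum().clamp(min=1.0)              # assumed guard against 0/0 on empty batches
    loss = (per_token_loss * mask).sum() / denom   # exactly 0 when the mask is all zeros
    # Surface the empty-target case as a metric, per this PR's description.
    metrics["self_distillation/empty_target_batch"] = float((mask.sum() == 0).item())
    return loss
```

An all-zero mask then produces a loss of 0.0 while still letting backward() reach the model, matching the verl-style masked-zero behavior mentioned in the description.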